Sparse recovery by reduced variance stochastic approximation

Abstract

In this paper, we discuss the application of iterative Stochastic Optimization routines to the problem of sparse signal recovery from noisy observations. Using the Mirror Descent algorithm as a building block, we develop a multistage procedure for recovery of sparse solutions under assumptions of smoothness and quadratic minoration on the expected objective. An interesting feature of the proposed algorithm is linear convergence of the approximate solution during the preliminary phase of the routine, when the component of the stochastic error in the gradient observation which is due to the bad initial approximation of the optimal solution is larger than the 'ideal' asymptotic error component owing to the observation noise 'at the solution'. We also show how one can straightforwardly enhance the reliability of the corresponding solutions using Median-of-Means-like techniques. We illustrate the performance of the proposed algorithms on classical problems of recovery of sparse and low-rank signals in the generalized regression framework. We show that, under rather weak assumptions on the regressor distributions, they lead to parameter estimates which obey (up to factors which are logarithmic in the problem dimension and confidence level) the best known accuracy bounds.
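
A minimal Python sketch of the ingredients named above may help fix ideas: a Mirror Descent step for the l_p potential with p close to 1 (the standard proxy for l_1 geometry), a multistage loop that sparsifies the averaged iterate and restarts with a halved stepsize, and a Median-of-Means-like selection over independent replicas. This is an illustration under assumptions, not the paper's exact algorithm; the oracle `stoch_grad`, the sparsity level `s` and all constants are hypothetical.

```python
import numpy as np

def md_step(x, g, gamma, p):
    """One mirror-descent step for the potential psi(x) = 0.5 * ||x||_p^2.

    With p = 1 + 1/ln(d) this geometry is the usual proxy for l_1 and
    contributes only logarithmic-in-dimension factors to the bounds.
    """
    q = p / (p - 1.0)                                   # 1/p + 1/q = 1
    nx = np.linalg.norm(x, p)
    grad_psi = (np.zeros_like(x) if nx == 0.0
                else nx ** (2.0 - p) * np.sign(x) * np.abs(x) ** (p - 1.0))
    y = grad_psi - gamma * g                            # step in the dual
    ny = np.linalg.norm(y, q)
    return (np.zeros_like(y) if ny == 0.0
            else ny ** (2.0 - q) * np.sign(y) * np.abs(y) ** (q - 1.0))

def multistage_smd(stoch_grad, x0, s, n_stages=8, steps=500, gamma0=0.1):
    """Run averaged SMD in stages; after each stage keep the s largest
    entries of the average (sparsification) and halve the stepsize."""
    p = 1.0 + 1.0 / np.log(x0.size)
    x, gamma = x0.copy(), gamma0
    for _ in range(n_stages):
        xt, acc = x.copy(), np.zeros_like(x)
        for _ in range(steps):
            xt = md_step(xt, stoch_grad(xt), gamma, p)
            acc += xt
        avg = acc / steps
        top = np.argsort(np.abs(avg))[-s:]              # sparsification
        x = np.zeros_like(avg)
        x[top] = avg[top]
        gamma /= 2.0
    return x

def mom_select(replicas):
    """Median-of-Means-like reliability boost: among independent replicas
    of the estimate, return the one whose median l_1 distance to the
    others is smallest."""
    dists = [[np.abs(a - b).sum() for b in replicas] for a in replicas]
    return replicas[int(np.argmin([np.median(r) for r in dists]))]
```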

Similar articles

Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning

We propose a stochastic variance reduced optimization algorithm for solving a class of large-scale nonconvex optimization problems with cardinality constraints. Theoretically, we provide sufficient conditions under which the proposed algorithm enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions. We further extend the analysis to its asynchronous varian...
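
The template described here, an SVRG-style variance-reduced gradient step followed by a projection onto the cardinality constraint, can be sketched as follows. The oracle names (`grad_full`, `grad_i`) and all constants are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def hard_threshold(x, s):
    """Project onto the cardinality constraint ||x||_0 <= s by keeping
    the s largest-magnitude entries."""
    out = np.zeros_like(x)
    top = np.argsort(np.abs(x))[-s:]
    out[top] = x[top]
    return out

def svrg_ht(grad_full, grad_i, n, x0, s, n_epochs=20, m=100, eta=0.1):
    """SVRG with hard thresholding for min (1/n) sum_i f_i(x), ||x||_0 <= s."""
    x_snap = x0.copy()
    for _ in range(n_epochs):
        mu = grad_full(x_snap)                  # full gradient at snapshot
        x = x_snap.copy()
        for _ in range(m):
            i = np.random.randint(n)
            v = grad_i(x, i) - grad_i(x_snap, i) + mu   # variance-reduced
            x = hard_threshold(x - eta * v, s)
        x_snap = x
    return x_snap
```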

Stochastic Variance-Reduced ADMM

The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an inte...
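
As a rough sketch of the idea (the split, oracles and constants below are assumptions for illustration, not the paper's proposed method), here is variance-reduced stochastic ADMM on a lasso-type problem; replacing a stored per-sample gradient table by a single snapshot is what keeps the memory footprint low.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def svrg_admm_lasso(grad_i, grad_full, n, x0, lam,
                    n_epochs=20, m=100, eta=0.1, rho=1.0):
    """Sketch of variance-reduced stochastic ADMM on the split
        min_x (1/n) sum_i f_i(x) + lam * ||z||_1   s.t.  x = z.
    The x-update linearizes f with an SVRG-style gradient estimate, so
    only one snapshot is stored, O(d) memory rather than a gradient table."""
    x = x0.copy()
    z = np.zeros_like(x0)
    u = np.zeros_like(x0)                  # scaled dual variable
    for _ in range(n_epochs):
        x_snap = x.copy()
        mu = grad_full(x_snap)             # full gradient at the snapshot
        for _ in range(m):
            i = np.random.randint(n)
            v = grad_i(x, i) - grad_i(x_snap, i) + mu   # VR estimate
            # proximal-linearized x-update (closed form for the quadratic)
            x = (x / eta + rho * (z - u) - v) / (1.0 / eta + rho)
            z = soft_threshold(x + u, lam / rho)        # z-update (prox)
            u = u + x - z                               # dual ascent
    return z
```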

Accelerated Variance Reduced Stochastic ADMM

Recently, many variance reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g. SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made exciting progress such as linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is O(1/T) as opposed to O(1/T^2) of accelerated batch algorithms, where T is the number of iter...

Riemannian stochastic variance reduced gradient

Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large but finite number of loss functions. In this paper, we propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm (R-SVRG) to a manifold search space. The key challenges of averaging, adding, and subtracting multiple gradients are addressed with r...
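
A minimal sketch of the idea on the unit sphere, assuming the retraction is renormalization and that vector transport is approximated by projection onto the current tangent space; `egrad_i` and `egrad_full` are hypothetical Euclidean gradient oracles, and the constants are illustrative.

```python
import numpy as np

def proj_tangent(x, g):
    """Project onto the tangent space of the unit sphere at x; for the
    round metric this turns a Euclidean gradient into a Riemannian one."""
    return g - np.dot(x, g) * x

def retract(x, v):
    """Retraction on the sphere: move in the tangent direction, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def rsvrg_sphere(egrad_i, egrad_full, n, x0, n_epochs=20, m=50, eta=0.05):
    """Riemannian SVRG on the unit sphere. Snapshot quantities are moved
    to the current tangent space by projection, a crude but serviceable
    stand-in for vector transport."""
    x_snap = x0 / np.linalg.norm(x0)
    for _ in range(n_epochs):
        mu = proj_tangent(x_snap, egrad_full(x_snap))     # full Riem. grad
        x = x_snap.copy()
        for _ in range(m):
            i = np.random.randint(n)
            g = proj_tangent(x, egrad_i(x, i))
            g_snap = proj_tangent(x, egrad_i(x_snap, i))  # "transported"
            v = g - g_snap + proj_tangent(x, mu)          # VR direction
            x = retract(x, -eta * v)
        x_snap = x
    return x_snap
```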

Stochastic Variance Reduced Riemannian Eigensolver

We study the stochastic Riemannian gradient algorithm for matrix eigendecomposition. The state-of-the-art stochastic Riemannian algorithm requires the learning rate to decay to zero and thus suffers from slow convergence and suboptimal solutions. In this paper, we address this issue by deploying the variance reduction (VR) technique of stochastic gradient descent (SGD). The technique was origin...
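
As a hypothetical usage example, the Riemannian sketch above specializes to an eigensolver: minimizing the negative Rayleigh quotient of A = (1/n) sum_i a_i a_i^T over the sphere recovers the leading eigenvector, and the variance-reduced direction permits a constant stepsize where plain Riemannian SGD must decay it. This reuses the `rsvrg_sphere` helper from the earlier sketch; the data and constants are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200
a = rng.standard_normal((n, d))                  # rows a_i give A_i = a_i a_i^T
A_mean = a.T @ a / n

egrad_i = lambda x, i: -2.0 * a[i] * (a[i] @ x)  # Euclidean grad of -x^T A_i x
egrad_full = lambda x: -2.0 * (A_mean @ x)       # grad of -x^T A_mean x

x_hat = rsvrg_sphere(egrad_i, egrad_full, n, rng.standard_normal(d))
lead = np.linalg.eigh(A_mean)[1][:, -1]          # reference leading eigenvector
print(abs(lead @ x_hat))                         # close to 1.0 when aligned
```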

Journal

Journal title: Information and Inference: A Journal of the IMA

Year: 2022

ISSN: 2049-8772, 2049-8764

DOI: https://doi.org/10.1093/imaiai/iaac028